Present and Future of SLAM in Extreme Underground Environments
This paper reports on the state of the art in underground SLAM by discussing
different SLAM strategies and results across six teams that participated in the
three-year-long SubT competition. In particular, the paper has four main goals.
First, we review the algorithms, architectures, and systems adopted by the
teams; particular emphasis is put on lidar-centric SLAM solutions (the go-to
approach for virtually all teams in the competition), heterogeneous multi-robot
operation (including both aerial and ground robots), and real-world underground
operation (from the presence of obscurants to the need to handle tight
computational constraints). We do not shy away from discussing the dirty
details behind the different SubT SLAM systems, which are often omitted from
technical papers. Second, we discuss the maturity of the field by highlighting
what is possible with the current SLAM systems and what we believe is within
reach with some good systems engineering. Third, we outline what we believe are
fundamental open problems that are likely to require further research to break
through. Finally, we provide a list of open-source SLAM implementations and
datasets that have been produced during the SubT challenge and related efforts,
and constitute a useful resource for researchers and practitioners.
Comment: 21 pages including references. This survey paper is submitted to IEEE
Transactions on Robotics for pre-approval.
NeBula: Team CoSTAR's robotic autonomy solution that won phase II of DARPA Subterranean Challenge
This paper presents and discusses algorithms, hardware, and software architecture developed by team CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR's demonstrations in Martian-analog surface and subsurface (lava tubes) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy solutions by performing reasoning and decision making in the belief space (space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying) in various environments, and the specific results and lessons learned from fielding this solution in the challenging courses of the DARPA Subterranean Challenge competition.
The work is partially supported by the Jet Propulsion Laboratory, California Institute of Technology,
under a contract with the National Aeronautics and Space Administration (80NM0018D0004), and the
Defense Advanced Research Projects Agency (DARPA).
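NeBula's defining idea is reasoning over probability distributions rather than point estimates. As a minimal illustration of what a single step in such a "belief space" looks like, here is a generic discrete Bayes filter update; this is a textbook sketch, not NeBula's actual machinery, and all names are illustrative:

```python
import numpy as np

def belief_update(belief, transition, likelihood):
    """One predict/update step of a discrete Bayes filter.

    belief:     current distribution over N discrete states
    transition: N x N matrix, transition[i, j] = P(next = j | current = i)
    likelihood: length-N vector, P(observation | state)

    Planning in belief space means choosing actions based on this whole
    distribution (and how actions reshape it), not on a single best guess.
    """
    predicted = belief @ transition      # predict: propagate uncertainty
    posterior = predicted * likelihood   # update: weight by the observation
    return posterior / posterior.sum()   # normalize to a distribution
```

For example, a uniform belief over three states combined with an observation that strongly favors state 0 collapses most of the probability mass onto that state in one update.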
LOCUS 2.0: Robust and Computationally Efficient Lidar Odometry for Real-Time Underground 3D Mapping
Lidar odometry has attracted considerable attention as a robust localization
method for autonomous robots operating in complex GNSS-denied environments.
However, achieving reliable and efficient performance on heterogeneous
platforms in large-scale environments remains an open challenge due to the
limitations of onboard computation and memory resources needed for autonomous
operation. In this work, we present LOCUS 2.0, a robust and
computationally-efficient lidar odometry system for real-time underground 3D
mapping. LOCUS 2.0 includes a novel normals-based Generalized Iterative
Closest Point (GICP) formulation that reduces the computation time
of point cloud alignment, an adaptive voxel grid filter that maintains the
desired computation load regardless of the environment's geometry, and a
sliding-window map approach that bounds the memory consumption. The proposed
approach is shown to be suitable to be deployed on heterogeneous robotic
platforms involved in large-scale explorations under severe computation and
memory constraints. We demonstrate LOCUS 2.0, a key element of the CoSTAR
team's entry in the DARPA Subterranean Challenge, across various underground
scenarios.
We release LOCUS 2.0 as an open-source library, together with a lidar-based
odometry dataset collected in challenging and large-scale underground
environments. The dataset features legged and wheeled platforms in multiple
environments including fog, dust, darkness, and geometrically degenerate
surroundings, with substantial total operation time and distance traveled.
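The adaptive voxel grid filter described above keeps the registration workload roughly constant by adjusting the voxel size to the density of each incoming scan. The sketch below shows the idea with a simple proportional feedback rule on the leaf size; the feedback rule, function names, and gain are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def voxel_downsample(points, leaf_size):
    """Keep one point per occupied voxel of edge length leaf_size."""
    keys = np.floor(points / leaf_size).astype(np.int64)
    # unique rows of the voxel-index array pick one representative per voxel
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def adaptive_voxel_filter(points, target_count, leaf_size, gain=0.1):
    """Downsample, then nudge the leaf size toward a target point count.

    Denser scans produce more filtered points -> the leaf grows (coarser
    grid, fewer points next scan); sparse scans shrink it. This keeps the
    downstream alignment cost roughly constant across environments.
    """
    filtered = voxel_downsample(points, leaf_size)
    ratio = len(filtered) / target_count
    new_leaf = leaf_size * (1.0 + gain * (ratio - 1.0))  # proportional feedback
    return filtered, new_leaf
```

In a running system the returned `new_leaf` would be fed back as the `leaf_size` for the next scan, so the filter settles near the target count regardless of the environment's geometry.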
LAMP: Large-Scale Autonomous Mapping and Positioning for Exploration of Perceptually-Degraded Subterranean Environments
© 2020 IEEE. Simultaneous Localization and Mapping (SLAM) in large-scale, unknown, and complex subterranean environments is a challenging problem. Sensors must operate in off-nominal conditions; uneven and slippery terrains make wheel odometry inaccurate, while long corridors without salient features make exteroceptive sensing ambiguous and prone to drift; finally, spurious loop closures that are frequent in environments with repetitive appearance, such as tunnels and mines, could result in a significant distortion of the entire map. These challenges are in stark contrast with the need to build highly-accurate 3D maps to support a wide variety of applications, ranging from disaster response to the exploration of underground extraterrestrial worlds. This paper reports on the implementation and testing of a lidar-based multi-robot SLAM system developed in the context of the DARPA Subterranean Challenge. We present a system architecture to enhance subterranean operation, including an accurate lidar-based front-end and a flexible and robust back-end that automatically rejects outlying loop closures. We present an extensive evaluation in large-scale, challenging subterranean environments, including the results obtained in the Tunnel Circuit of the DARPA Subterranean Challenge. Finally, we discuss potential improvements, limitations of the state of the art, and future research directions.
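Rejecting outlying loop closures, as the LAMP back-end does, amounts to asking whether a candidate closure is plausible given the accumulated odometry drift between the two keyframes. One standard gate for this is a Mahalanobis (chi-square) consistency test; the 2D sketch below illustrates that test only, and is much simpler than LAMP's actual back-end:

```python
import numpy as np

def loop_closure_consistent(odom_delta, loop_delta, odom_cov, chi2_thresh=7.81):
    """Gate a candidate loop closure with a Mahalanobis consistency test.

    odom_delta: relative pose (x, y, yaw) between two keyframes,
                accumulated from odometry
    loop_delta: relative pose proposed by the candidate loop closure
                (e.g. from scan matching)
    odom_cov:   3x3 covariance of the odometry chain between the keyframes

    Accepts the closure only if the residual is plausible given odometry
    drift; 7.81 is the chi-square 95% bound for 3 degrees of freedom.
    """
    r = np.asarray(loop_delta, float) - np.asarray(odom_delta, float)
    r[2] = (r[2] + np.pi) % (2 * np.pi) - np.pi   # wrap the yaw residual
    m2 = r @ np.linalg.solve(odom_cov, r)         # squared Mahalanobis distance
    return m2 < chi2_thresh
```

A closure whose relative pose deviates from the odometry chain by far more than the drift covariance allows (e.g. a tunnel segment mismatched to a similar-looking one elsewhere) fails the test and is excluded from optimization.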
LAMP 2.0: A Robust Multi-Robot SLAM System for Operation in Challenging Large-Scale Underground Environments
Search and rescue with a team of heterogeneous mobile robots in unknown and
large-scale underground environments requires high-precision localization and
mapping. This crucial requirement is faced with many challenges in complex and
perceptually-degraded subterranean environments, as the onboard perception
system is required to operate in off-nominal conditions (poor visibility due to
darkness and dust, rugged and muddy terrain, and the presence of self-similar
and ambiguous scenes). In a disaster response scenario and in the absence of
prior information about the environment, robots must rely on noisy sensor data
and perform Simultaneous Localization and Mapping (SLAM) to build a 3D map of
the environment and localize themselves and potential survivors. To that end,
this paper reports on a multi-robot SLAM system developed by team CoSTAR in the
context of the DARPA Subterranean Challenge. We extend our previous work, LAMP,
by incorporating a single-robot front-end interface that is adaptable to
different odometry sources and lidar configurations, a scalable multi-robot
front-end to support inter- and intra-robot loop closure detection for large
scale environments and multi-robot teams, and a robust back-end equipped with
an outlier-resilient pose graph optimization based on Graduated Non-Convexity.
We provide a detailed ablation study on the multi-robot front-end and back-end,
and assess the overall system performance in challenging real-world datasets
collected across mines, power plants, and caves in the United States. We also
release our multi-robot back-end datasets (and the corresponding ground truth),
which can serve as challenging benchmarks for large-scale underground SLAM.
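Graduated Non-Convexity (GNC), which makes the LAMP 2.0 back-end outlier-resilient, solves a robust estimation problem by starting from a convex surrogate of a robust loss and gradually tightening it, reweighting residuals at each step. The toy 1D sketch below applies GNC with a Geman-McClure loss to robust mean estimation; it illustrates the mechanism only, and the schedule and constants are illustrative assumptions rather than the LAMP 2.0 configuration:

```python
import numpy as np

def gnc_robust_mean(x, c=1.0, mu_factor=1.4, iters=30):
    """Robust mean via Graduated Non-Convexity with a Geman-McClure loss.

    Alternates weighted least squares with a surrogate-tightening schedule:
    mu starts large (nearly convex problem, all points weighted) and shrinks
    toward 1 (the true robust loss), progressively down-weighting outliers.
    c is the inlier scale. A 1D analogue of outlier-resilient pose graph
    optimization.
    """
    est = np.median(x)                    # cheap, reasonably robust init
    r2 = (x - est) ** 2
    mu = max(2.0 * r2.max() / c**2, 1.0)  # standard GNC initialization
    for _ in range(iters):
        w = (mu * c**2 / (r2 + mu * c**2)) ** 2   # GNC Geman-McClure weights
        est = np.sum(w * x) / np.sum(w)           # weighted least-squares step
        r2 = (x - est) ** 2
        mu = max(1.0, mu / mu_factor)             # tighten the surrogate
    return est
```

With a handful of gross outliers mixed into an inlier cluster, the plain mean is pulled far off while the GNC estimate stays on the cluster, which is exactly the behavior needed when a few spurious loop closures enter a pose graph.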